Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has been commonly applied to two human partners who are trying to work jointly together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between each partner.
Curiosity for machine agents has been a focus of lively research activity. The study of human and animal curiosity, particularly specific curiosity, has unearthed several properties that would offer important benefits for machine learners, but that have not yet been well-explored in machine intelligence. In this work, we conduct a comprehensive, multidisciplinary survey of the field of animal and machine curiosity. As a principal contribution of this work, we use this survey as a foundation to introduce and define what we consider to be five of the most important properties of specific curiosity: 1) directedness towards inostensible referents, 2) cessation when satisfied, 3) voluntary exposure, 4) transience, and 5) coherent long-term learning. As a second main contribution of this work, we show how these properties may be implemented together in a proof-of-concept reinforcement learning agent: we demonstrate how the properties manifest in the behaviour of this agent in a simple non-episodic grid-world environment that includes curiosity-inducing locations and induced targets of curiosity. As we would hope, our example of a computational specific curiosity agent exhibits short-term directed behaviour while updating long-term preferences to adaptively seek out curiosity-inducing situations. This work, therefore, presents a landmark synthesis and translation of specific curiosity to the domain of machine learning and reinforcement learning and provides a novel view into how specific curiosity operates and in the future might be integrated into the behaviour of goal-seeking, decision-making computational agents in complex environments.
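Three of the five properties named above (directedness, cessation when satisfied, and transience) can be illustrated in a few lines of code. The following is a minimal sketch under assumed names and parameters, not the paper's agent: a `CuriousAgent` class whose curiosity toward a target is induced by an event, directs behaviour toward the most curiosity-inducing target, is extinguished on satisfaction, and otherwise decays over time.

```python
# Minimal sketch (illustrative, not the paper's implementation) of three
# specific-curiosity properties: directedness, cessation when satisfied,
# and transience.

class CuriousAgent:
    def __init__(self, decay=0.9):
        self.decay = decay       # transience: unsatisfied curiosity fades
        self.curiosity = {}      # target -> current curiosity level

    def induce(self, target, strength=1.0):
        # A curiosity-inducing event creates a directed drive toward `target`.
        self.curiosity[target] = strength

    def step(self):
        # Transience: curiosity decays a little on every time step.
        for t in self.curiosity:
            self.curiosity[t] *= self.decay

    def visit(self, target):
        # Cessation when satisfied: reaching the target extinguishes curiosity.
        self.curiosity[target] = 0.0

    def most_curious(self):
        # Directedness: behaviour aims at the most curiosity-inducing target.
        if not self.curiosity:
            return None
        return max(self.curiosity, key=self.curiosity.get)
```

The remaining two properties (directedness toward inostensible referents and coherent long-term learning) require the full reinforcement learning machinery described in the paper and are not captured by this sketch.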
Herein we describe our approach to artificial intelligence research, which we call the Alberta Plan. The Alberta Plan is pursued within our research groups in Alberta and by others who are like-minded throughout the world. We welcome all who would join us in this pursuit.
In computational reinforcement learning, a growing body of work seeks to construct an agent's perception of the world through predictions of future sensations. Predictions about environment observations are used as additional input features to enable better goal-directed decision-making. An open challenge in this work is deciding, from the many predictions an agent could possibly make, which predictions might best support decision-making. This challenge is especially apparent in continual learning problems, where a single stream of experience is available to a single agent. As a primary contribution, we introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward -- all during a single ongoing process of continual learning. In this manuscript we consider predictions expressed as General Value Functions: temporally extended estimates of the accumulation of a future signal. We demonstrate that through interaction with the environment an agent can independently select predictions that resolve partial observability, resulting in performance similar to expertly specified GVFs. By learning, rather than manually specifying, these predictions, we enable the agent to identify useful predictions in a self-supervised manner, taking a step towards truly autonomous systems.
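A General Value Function of the kind the abstract describes can be sketched as a linear TD(0) learner whose target is the discounted accumulation of an arbitrary cumulant signal (not necessarily the reward). This is an illustrative sketch only: the function name, feature encoding, and step-size values are assumptions, and the paper's contribution — meta-learning *which* cumulants to predict — is not shown here.

```python
import numpy as np

# Sketch of a General Value Function (GVF): a linear TD(0) learner that
# estimates the discounted accumulation of an arbitrary cumulant signal.
# Hyperparameters here are illustrative, not the paper's settings.

def td0_gvf(features, cumulants, gamma=0.9, alpha=0.1):
    """Learn w so that w @ x_t predicts sum_k gamma^k * cumulant_{t+k}."""
    w = np.zeros(features.shape[1])
    for t in range(len(cumulants) - 1):
        x, x_next = features[t], features[t + 1]
        # TD error for the cumulant signal
        delta = cumulants[t] + gamma * (w @ x_next) - (w @ x)
        w += alpha * delta * x
    return w
```

For a constant cumulant of 1 under a single constant feature, the learned prediction converges to the discounted sum 1/(1 - gamma), which provides a quick sanity check of the update.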
In this paper, we contribute a multi-faceted study into Pavlovian signalling -- a process by which learned, temporally extended predictions made by one agent inform decision-making by another agent. Signalling is intimately connected to time and timing. In the service of generating and receiving signals, humans and other animals are known to represent time, determine the time since past events, predict the time until a future stimulus, and both recognize and generate patterns that unfold in time. We investigate how different temporal processes impact coordination and signalling between learning agents by introducing a partially observable decision-making domain we call the Frost Hollow. In this domain, a prediction learning agent and a reinforcement learning agent are coupled into a two-part decision-making system that works to acquire sparse reward while avoiding time-conditional hazards. We evaluate two domain variations: machine agents interacting in a seven-state linear walk, and human-machine interaction in a virtual-reality environment. Our results showcase the speed of learning for Pavlovian signalling, the impact that different temporal representations do (and do not) have on agent-agent coordination, and how temporal aliasing impacts agent-agent and human-agent interactions differently. As a main contribution, we establish Pavlovian signalling as a natural bridge between fixed signalling paradigms and fully adaptive communication learning between two agents. We further show how this adaptive signalling process can be computationally constructed out of a fixed signalling process, characterized by fast continual prediction learning and minimal constraints on the nature of the received signal. Our results therefore suggest an actionable, constructivist path towards communication learning between reinforcement learning agents.
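The core mechanism — a fixed signalling rule layered on top of a continually learned, temporally extended prediction — can be sketched compactly. The class and function names below are hypothetical, and the one-state TD predictor is a drastic simplification of the Frost Hollow agents; the sketch only shows how a learned prediction can drive a token with fixed semantics.

```python
# Sketch (illustrative, not the paper's Frost Hollow setup) of Pavlovian
# signalling: a fixed mapping from a learned, temporally extended
# prediction to a token consumed by a second, decision-making agent.

def pavlovian_token(prediction, threshold=0.5):
    # Fixed signalling rule: the token's semantics never change;
    # only the prediction feeding it is learned.
    return 1 if prediction >= threshold else 0

class HazardPredictor:
    """One-state TD(0) predictor of an upcoming hazard cue."""
    def __init__(self, gamma=0.8, alpha=0.2):
        self.gamma, self.alpha, self.v = gamma, alpha, 0.0

    def update(self, hazard_cue):
        # v estimates the discounted sum of future hazard cues.
        delta = hazard_cue + self.gamma * self.v - self.v
        self.v += self.alpha * delta
        return self.v
```

As the predictor's estimate rises in the presence of repeated hazard cues, the emitted token switches on; when cues cease, the prediction decays and the token switches off again.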
Artificial intelligence systems increasingly involve continual learning to enable flexibility in general situations that are not encountered during system training. Human interaction with autonomous systems is broadly studied, but research has so far under-explored interactions that occur while the system is actively learning and can noticeably change its behaviour within minutes. In this pilot study, we investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency. Additionally, we compare two different agent architectures to assess how representational choices in agent design affect the human-agent interaction. We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions. We evaluate how a participant's performance and behaviour in this task differ across agent types, using both quantitative and qualitative analyses. Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour, but the limitations of the pilot study preclude any conclusive statements. We identify trust as a key feature of interaction to focus on when considering RL-based technologies, and make several recommendations for modifying this study in preparation for a larger-scale investigation. A video summary of this paper can be found at https://youtu.be/ovyjdnbqtwq
In computational reinforcement learning, a growing body of work seeks to express an agent's model of the world through predictions about future sensations. In this manuscript, we focus on predictions expressed as General Value Functions: temporally extended estimates of the accumulation of a future signal. One challenge is determining, from the infinitely many predictions that the agent could possibly make, which predictions might best support decision-making. In this work, we contribute a meta-gradient descent method by which an agent can directly specify the predictions it learns, independent of designer instruction. To that end, we introduce a partially observable domain suited to this investigation. We then demonstrate that through interaction with the environment an agent can independently select predictions that resolve the partial observability, resulting in performance similar to expertly chosen value functions. By learning, rather than manually specifying, these predictions, we enable the agent to identify useful predictions in a self-supervised manner, taking a step towards truly autonomous systems.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
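Patch-based training, the most commonly reported strategy (69%) for samples too large to process at once, reduces to training on fixed-size crops instead of whole images. A minimal sketch follows; the function name, patch size, and sampling scheme are illustrative assumptions, not prescriptions from the survey.

```python
import numpy as np

# Sketch of the patch-extraction step in patch-based training:
# sample fixed-size square crops from a 2-D image array.
# Patch size and random sampling are illustrative choices.

def random_patches(image, patch_size=64, n_patches=8, seed=0):
    """Sample `n_patches` random square patches from a 2-D image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_size + 1)
        x = rng.integers(0, w - patch_size + 1)
        patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches
```

A training loop would then feed these crops to the network in place of full-resolution samples, trading global context for tractable memory use.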
In many risk-aware and multi-objective reinforcement learning settings, the utility of the user is derived from a single execution of a policy. In these settings, making decisions based on the average future returns is not suitable. For example, in a medical setting a patient may only have one opportunity to treat their illness. Making decisions using just the expected future returns -- known in reinforcement learning as the value -- cannot account for the potential range of adverse or positive outcomes a decision may have. Therefore, we should use the distribution over expected future returns differently to represent the critical information that the agent requires at decision time by taking both the future and accrued returns into consideration. In this paper, we propose two novel Monte Carlo tree search algorithms. Firstly, we present a Monte Carlo tree search algorithm that can compute policies for nonlinear utility functions (NLU-MCTS) by optimising the utility of the different possible returns attainable from individual policy executions, resulting in good policies for both risk-aware and multi-objective settings. Secondly, we propose a distributional Monte Carlo tree search algorithm (DMCTS) which extends NLU-MCTS. DMCTS computes an approximate posterior distribution over the utility of the returns, and utilises Thompson sampling during planning to compute policies in risk-aware and multi-objective settings. Both algorithms outperform the state-of-the-art in multi-objective reinforcement learning for the expected utility of the returns.
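The central idea — acting on the utility of individual returns rather than the mean return — can be shown at bandit scale. The sketch below is a toy stand-in for the full NLU-MCTS/DMCTS planners: the utility function is an invented example, and sampling a past return stands in crudely for drawing from an approximate posterior, as Thompson sampling would.

```python
import random

# Bandit-sized sketch (not the paper's tree-search planners) of acting on
# the utility of single-execution returns rather than their expectation.

def nonlinear_utility(ret, threshold=0.0):
    # Example risk-aware utility: heavily penalise returns below a threshold.
    return ret if ret >= threshold else 10.0 * ret

def thompson_choice(return_samples, utility, rng=None):
    """Pick the action whose sampled return has the highest utility.

    `return_samples` maps each action to returns observed from individual
    policy executions; drawing one at random is a crude stand-in for
    sampling from a posterior over returns.
    """
    rng = rng or random.Random(0)
    best_action, best_u = None, float("-inf")
    for action, samples in return_samples.items():
        sampled_return = rng.choice(samples)
        u = utility(sampled_return)
        if u > best_u:
            best_action, best_u = action, u
    return best_action
```

Under a mean-return criterion a high-variance action can look attractive, but once returns pass through a risk-averse utility, the safe action dominates — which is the behaviour the single-execution setting calls for.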
The energy sector is facing rapid changes in the transition towards clean renewable sources. However, the growing share of volatile, fluctuating renewable generation such as wind or solar energy has already led to an increase in power grid congestion and network security concerns. Grid operators mitigate these by modifying either generation or demand (redispatching, curtailment, flexible loads). Unfortunately, redispatching of fossil generators leads to excessive grid operation costs and higher emissions, which is in direct opposition to the decarbonization of the energy sector. In this paper, we propose an AlphaZero-based grid topology optimization agent as a non-costly, carbon-free congestion management alternative. Our experimental evaluation confirms the potential of topology optimization for power grid operation, achieves a reduction of the average amount of required redispatching by 60%, and shows the interoperability with traditional congestion management methods. Our approach also ranked 1st in the WCCI 2022 Learning to Run a Power Network (L2RPN) competition. Based on our findings, we identify and discuss open research problems as well as technical challenges for a productive system on a real power grid.